posted 02-28-2008 04:19 PM
Skip, I've sent you an update. It's only minor changes, but there is a fun new additional algorithm to test the significance of the distribution of artifacts among CQs and RQs.
Let me know if it doesn't arrive or if you have any questions.
This may or may not be what you are thinking about.
http://www.oss3.info/OSS-3_advanced_user_options.html
It's a map of all of the advanced user options and settings for OSS-3.
If you are working with our laboratory model (spreadsheet), then you'll be aware that we have not hidden any of the math, transformations, or procedures.
The only worksheets you really need are DATA_ENTRY and OSS3_REPORT (formatted for printing). There is also an OSS2_REPORT, which is also formatted for printing.
All of the advanced options are available at the bottom of the DATA_ENTRY worksheet. All of them can be changed and reset to default as much as you like.
We could easily hide all of the other worksheets for you, but have left them visible in case you are interested or cannot sleep (it's cheaper than Ambien, and safer than Lunesta).
quote:
Everything below those first three lines becomes a bit blurry. So, without fear of looking any more ignorant than I already do: What is a p-value?
What is a good p value and what is a bad p value?
Does a higher p-value mean the person is more likely lying or telling the truth?
In the Decision Alpha (1-tailed), what is the significance of the numbers under NSR, SR, and Bonferroni-corrected alpha? Again, does a bigger number mean more DI or more NDI?
Under Results Weighted Mean and Grand Mean, does a bigger number or a smaller number mean more DI?
We are admittedly pushing everyone with the content, concepts, and options in OSS-3. Part of that is on purpose. Though painful, it can only help in the long run to become more familiar and conversant with these concepts, as they are familiar to the geeks and propellerheads, and some anti-poly folks.
p-value is short for probability-value. In OSS-3, and other Gaussian models, smaller p-values are better.
Good p-values are anything less than alpha. Bad p-values are anything greater than alpha.
Alpha is the predetermined acceptable Type-1 error rate (the likelihood of calling something significant when it is not).
Because alpha pertains to tolerance for risk or error, and also affects the Type-2 error rate (the likelihood of not noticing something that is actually significant) and the inconclusive rate, alpha is as much a matter of policy as it is a matter of science. Commonly employed alpha settings are .05, .01, .001, and .1, which correspond to Type-1 error rates of 1 in 20, 1 in 100, 1 in 1000, and 1 in 10.
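If it helps to see the decision rule as code, here is a toy sketch in Python. This is not OSS-3 code; the function name is mine, and the alpha values are just the commonly employed settings mentioned above:

```python
# Toy sketch of the p-value vs. alpha decision rule (not OSS-3 code).

def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """A 'good' p-value is anything less than alpha."""
    return p_value < alpha

# Each alpha setting corresponds to a tolerated Type-1 error rate of 1 in (1/alpha).
for alpha in (0.05, 0.01, 0.001, 0.1):
    print(f"alpha = {alpha}: Type-1 error rate of 1 in {round(1 / alpha)}")
```

The whole "policy vs. science" point lives in that default argument: the math doesn't pick alpha for you.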
OSS-3 allows you to set alpha according to your individual or agency needs. We have found that an asymmetrical alpha solution serves both to reduce inconclusive results and to optimize the balance of sensitivity to deception and specificity to truthfulness. This is exciting, in a geeky sort of way, because polygraph specificity rates have not always been impressive.
Because OSS-3 is a Gaussian (two-distribution) model, smaller p-values are always better, regardless of whether a test result is truthful or deceptive. Buried in the math, you'll find that if there is a small p-value for the deceptive distribution (a truthful result), there will be a large p-value for the truthful distribution. Or, if there is a small p-value for the truthful distribution (a deceptive result), there will be a large p-value for the deceptive distribution. Gaussian theory doesn't care where, or how large, those larger p-values occur – it doesn't matter. What matters is the small p-value, because it allows you to determine that someone's test result is most likely not well represented by the known model (normative distribution data) for either truthful or deceptive persons. So, smaller p-values are indicative of a lower likelihood of error, whether your result is SR or NSR. The algorithm will give you the p-value from the correct normative distribution. Just remember: smaller is better – with p-values.
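For the geeks, the two-distribution idea can be sketched numerically. This is a toy illustration using made-up normative means and standard deviations – NOT the actual OSS-3 norms or code – just to show why a score far from one distribution gives a small p-value there and a large p-value under the other:

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Lower-tail probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical normative parameters (mean, sd) -- NOT the published OSS-3 norms.
TRUTHFUL_NORMS = (1.0, 1.0)    # scores from known-truthful cases
DECEPTIVE_NORMS = (-1.0, 1.0)  # scores from known-deceptive cases

def two_distribution_p(score: float):
    """p-value of a score under each normative distribution.

    A strongly positive (truthful-looking) score is poorly represented by
    the deceptive norms (small p there), and vice versa.
    """
    mu_t, sd_t = TRUTHFUL_NORMS
    mu_d, sd_d = DECEPTIVE_NORMS
    p_truthful = normal_cdf(score, mu_t, sd_t)         # lower tail under truthful norms
    p_deceptive = 1.0 - normal_cdf(score, mu_d, sd_d)  # upper tail under deceptive norms
    return p_truthful, p_deceptive

p_t, p_d = two_distribution_p(2.5)  # a truthful-looking score
# The small p under the deceptive distribution is what supports an NSR result;
# the large p under the truthful distribution is the part the theory ignores.
```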
Other scoring algorithms appear to have used single-distribution models (e.g., Polyscore), and CPS makes a uniform assumption about its probability model.
Bonferroni is a well-known (among geeks, that is) procedure for controlling the known phenomenon of increased Type-1 error likelihood when conducting multiple simultaneous significance tests (e.g., scoring several RQs as separate spots).
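The correction itself is just division. A minimal sketch (my function name, not OSS-3 code), with a few lines showing why it's needed when several RQs are tested at once:

```python
# Minimal sketch of the Bonferroni correction (not OSS-3 code).

def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Per-test alpha that holds the familywise Type-1 error near alpha."""
    return alpha / n_tests

# Scoring 3 RQs as separate spots at alpha = .05:
corrected = bonferroni_alpha(0.05, 3)  # about .0167 per spot

# Why bother: the chance of at least one false positive grows with the
# number of simultaneous tests.
familywise_uncorrected = 1 - (1 - 0.05) ** 3       # about .14 -- nearly triple
familywise_corrected = 1 - (1 - corrected) ** 3    # back under .05
```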
Results at the bottom, under or near the Grand Mean score, can be imagined to approximate hand scores (only with decimals, not integers). Minus = deceptive. Plus = truthful. The numbers will occur between -3 and +3, only they represent standard deviations in the normative data. There will be no attempt to make this represent hand scoring, as that would only degrade the math. It's an approximation only, but it's useful, so we wanted it on the report.
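Since the Grand Mean is in standard-deviation units, it connects straight back to the p-value discussion above. A toy illustration (my numbers and function names, not OSS-3 output):

```python
import math

# Reading the Grand Mean: standard-deviation units, minus = deceptive,
# plus = truthful. Toy illustration only -- not OSS-3 code or norms.

def read_grand_mean(z: float) -> str:
    if z < 0:
        return "deceptive-leaning (minus)"
    if z > 0:
        return "truthful-leaning (plus)"
    return "neutral"

def lower_tail_p(z: float) -> float:
    """Connects a standard-deviation score back to a lower-tail p-value."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: a Grand Mean of -1.8 standard deviations is deceptive-leaning,
# with a lower-tail p of about .036 -- under a .05 alpha.
```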
I believe the measurement data are in inches if you are using the Extract tool. Limestone allows inches or millimeters.
r
------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)